Notes on Searle's "Notes on artificial intelligence"
I like the Berkeley answer - that it is the system that knows - and will defend
it against Searle's objections.
1. Suppose that the human doesn't know Chinese, only a set of rules for
making responses to strings of Chinese characters. Suppose further that
the rules are elaborate enough that following them produces the behavior of
an excellent Chinese scholar. I would then say that this system, though it is
not the "underlying mental system" of the person, understands Chinese. That a
person could carry out rules elaborate enough to produce such behavior is
implausible, given the limitations of human data-processing capability. Of
course, AI is not ready to formulate the required rules anyway.
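The formal character of this rule following can be made concrete with a toy
sketch in Python. The rule table here is hypothetical and absurdly small
compared with what the thought experiment requires; the point is only that
no step of the execution consults what the characters mean.

    # A toy rule table: each entry maps an input string of Chinese
    # characters to a response string.  The glosses in the comments are
    # ours; the executor matches shapes and copies shapes.
    RULES = {
        "你好": "你好吗",    # roughly "hello" -> "how are you"
        "谢谢": "不客气",    # roughly "thanks" -> "you're welcome"
    }

    def respond(chars):
        # Purely formal lookup: nothing here depends on meaning.
        return RULES.get(chars, "请再说一遍")   # fixed default, equally uninterpreted

    print(respond("你好"))   # prints 你好吗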
2. There are some weakly analogous cases, however. Some schizophrenics are said
to have split personalities - the same brain carries out the mental processes
of the several personalities.
3. Searle's problem can be mapped into a purely computer framework. Suppose
a program, like the proposed Advice Taker, formulates what it knows as
sentences of logic and decides what to do by reasoning in this logic. Suppose
further that this program can engage in reasonably intelligent dialogue - in
the logical language or even in English. Suppose further that someone gives
the program - in logic or in English - a set of rules for manipulating
pictographs but gives no rules for translating the pictographs into its
basic logical language. Suppose further that the rules again suffice to
produce the behavior of a Chinese scholar. Then the interpreted system knows
Chinese and many facts about Chinese literature, while the basic system
doesn't.
Actually, the same problems would arise if the interpreted language were English,
except that the basic system would have to be rather obtuse to obey the
rules given in English for manipulating quoted strings of characters without
noticing that these strings could themselves be interpreted in its base language.
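This layered arrangement can be sketched as follows. Everything here - the
class, the sentences, the rule table - is hypothetical and far simpler than
an Advice Taker would be; what matters is that nothing connects the
pictograph rules to the sentences the base system can reason with.

    class BaseSystem:
        def __init__(self):
            # Sentences in the base system's own logical language.
            self.knowledge = {"at(I, desk)", "can(I, go(airport))"}
            # Rewrite rules over opaque pictographs, given as string pairs;
            # no rule links a pictograph to a sentence in self.knowledge.
            self.pictograph_rules = {"孔子": "春秋", "李白": "唐诗"}

        def knows(self, sentence):
            # Base-level competence: lookup of its own sentences.  A real
            # system would deduce, not merely look up.
            return sentence in self.knowledge

        def manipulate(self, pictograph):
            # Interpreted-level competence: purely formal rewriting.  The
            # interpreted system "knows Chinese"; the base system does not.
            return self.pictograph_rules.get(pictograph, pictograph)

    s = BaseSystem()
    print(s.knows("at(I, desk)"))   # True - the base system understands this
    print(s.manipulate("孔子"))      # 春秋 - formal manipulation only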
Searle's paper points up the fact that a given brain could be host to several
minds. These minds could be parallel in the brain, in that the %2hardware%1
interprets them separately, or organized in layers, where one mind knows how
to obey the rules of another with or without being able to translate the
other's sentences into its own language.
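The difference between the two layered cases - obeying another mind's rules
with or without being able to translate - can be sketched the same way, again
with hypothetical rules:

    def obey(rules, sentence, translation=None):
        # The outer mind applies the inner mind's rewrite rules as
        # uninterpreted data.
        rewritten = rules.get(sentence, sentence)
        # An optional translation table is what separates obeying with
        # understanding from obeying blindly.
        meaning = translation.get(sentence) if translation else None
        return rewritten, meaning

    print(obey({"甲": "乙"}, "甲"))                      # ('乙', None): blind obedience
    print(obey({"甲": "乙"}, "甲", {"甲": "greeting"}))  # ('乙', 'greeting')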
If such a phenomenon were common, ordinary language would not make the "category
mistake" of saying "John saw himself in the mirror", since it would not
identify the personality John with the physical object. Since the phenomenon
probably doesn't actually occur except with computers, ordinary language users
make this mistake only in connection with computers and officialdoms. The
statement "The computer knows X" or "The government knows X" often elicits
the reply "What program are you talking about - do you mean the personnel
program that has just been moved from an IBM 370/168 to an IBM 3033?" or "What
agency do you mean?".